# Textual Entailment Recognition

| Model | Author | License | Tags | Downloads | Likes | Description |
|---|---|---|---|---|---|---|
| Bert Base Uncased Finetuned Rte Run Trial3 | BoranIsmet | Apache-2.0 | Text Classification, Transformers | 59 | 1 | Fine-tuned from bert-base-uncased for textual entailment recognition; 66.43% accuracy. |
| Nli Distilroberta Base | cross-encoder | Apache-2.0 | Text Classification, English | 26.81k | 24 | DistilRoBERTa-based cross-encoder for natural language inference; labels sentence pairs as contradiction, entailment, or neutral. |
| Deberta V3 Large Finetuned Mnli | mrm8488 | MIT | Text Classification, Transformers, English | 31 | 2 | DeBERTa-v3-large fine-tuned on the GLUE MNLI dataset; 90% accuracy on the validation set. |
| Roberta Large Mnli | prajjwal1 | | Large Language Model, Transformers | 31 | 0 | RoBERTa-large trained on the MNLI dataset for natural language inference. |
| Bert Base Uncased Finetuned Rte | anirudh21 | Apache-2.0 | Text Classification, Transformers | 86 | 0 | BERT base model fine-tuned on the GLUE RTE task for text classification. |
| Distilbert Base Uncased Finetuned Rte | anirudh21 | Apache-2.0 | Text Classification, Transformers | 20 | 0 | DistilBERT fine-tuned on the GLUE RTE task; 61.73% accuracy. |
| Distilcamembert Base Nli | cmarkea | MIT | Text Classification, Transformers, Multiple Languages | 6,327 | 11 | Lightweight model fine-tuned from DistilCamemBERT for French natural language inference; roughly 50% faster inference than the original CamemBERT. |
| Bert Base Uncased Mnli | ishan | | Text Classification, English | 2,506 | 2 | bert-base-uncased fine-tuned on the MultiNLI dataset for text classification. |
| Albert Base V2 Finetuned Rte | anirudh21 | Apache-2.0 | Text Classification, Transformers | 15 | 0 | albert-base-v2 fine-tuned on the GLUE RTE task, primarily for textual entailment recognition. |
| Nli Deberta V3 Base | cross-encoder | Apache-2.0 | Text Classification, Transformers, English | 65.55k | 31 | Cross-encoder based on microsoft/deberta-v3-base, trained for natural language inference; labels sentence pairs as contradiction, entailment, or neutral. |
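The cross-encoder models listed above score a (premise, hypothesis) pair and emit one raw logit per label. As a minimal sketch of the final step, here is how logits are typically converted to a predicted label with a softmax. The three-way label order below is an assumption for illustration; real checkpoints store the mapping in their config (`id2label`), so verify it before relying on it.

```python
import math

# Assumed label order for a 3-way NLI head; actual models define
# this in their config (id2label) and it may differ per checkpoint.
LABELS = ["contradiction", "entailment", "neutral"]

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits, labels=LABELS):
    """Map raw NLI logits to (label, probability)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical logits for a pair such as
# ("A man is eating food.", "A man is eating."):
label, prob = predict_label([-2.1, 4.0, -0.5])
print(label, round(prob, 3))  # entailment 0.987
```

In practice the logits would come from running the pair through one of the models above (for example with the `transformers` library); this sketch only shows the label-mapping step, which is the same regardless of which listed model produced the scores.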
© 2025 AIbase